Oral food challenges (OFCs) are essential for accurately diagnosing food allergy in patients. However, patients are hesitant to undergo OFCs, and for those who do, access to allergists in rural/community healthcare settings is limited. Prediction of OFC outcomes through machine learning methods can facilitate the removal of food allergens at home, improve patient and physician comfort during OFCs, and save medical resources by minimizing the number of OFCs performed. Clinical data were gathered from 1,112 patients who collectively underwent 1,284 OFCs, and included clinical factors such as serum-specific IgE, total IgE, skin prick tests (SPTs), symptoms, sex, and age. Using these clinical features, machine learning models were built to predict the outcomes of peanut, egg, and milk challenges. The best-performing model for each allergen was created using the Learning Using Concave and Convex Kernels (LUCCK) method, which achieved an area under the curve (AUC) of 0.76, 0.68, and 0.70 for peanut, egg, and milk OFC prediction, respectively. Model interpretation via SHapley Additive exPlanations (SHAP) indicates that specific IgE, along with the wheal and flare values of SPTs, is highly predictive of OFC outcomes. The results of this analysis suggest that machine learning has the potential to predict OFC outcomes and reveal relevant clinical factors for further study.
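As a rough illustration of the workflow described above, the sketch below trains a stand-in classifier on hypothetical clinical features and explains it with SHAP. The LUCCK method itself is not publicly packaged, so a gradient-boosted classifier is used only for illustration; the file name, column names, and encoding are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: predict OFC outcomes from clinical features and explain
# the model with SHAP. A scikit-learn classifier stands in for LUCCK here.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assumed column names -- the real dataset is not public; sex is assumed to be
# numerically encoded.
features = ["specific_ige", "total_ige", "spt_wheal_mm", "spt_flare_mm", "age", "sex"]
df = pd.read_csv("peanut_ofc.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["ofc_passed"], stratify=df["ofc_passed"], random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

# SHAP values indicate which clinical factors drive each prediction.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=features)
```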
Can we take a recurrent neural network (RNN) trained to translate between languages and augment it to support a new natural language without retraining the model from scratch? Can we fix the faulty behavior of the RNN by replacing portions associated with the faulty behavior? Recent works on decomposing a fully connected neural network (FCNN) and convolutional neural network (CNN) into modules have shown the value of engineering deep models in this manner, which is standard in traditional SE but foreign to deep learning models. However, prior work focuses on image-based multiclass classification problems and cannot be applied to RNNs due to (a) different layer structures, (b) loop structures, (c) different types of input-output architectures, and (d) the use of both nonlinear and logistic activation functions. In this work, we propose the first approach to decompose an RNN into modules. We study different types of RNNs, i.e., Vanilla, LSTM, and GRU. Further, we show how such RNN modules can be reused and replaced in various scenarios. We evaluate our approach on 5 canonical datasets (i.e., Math QA, Brown Corpus, Wiki-toxicity, Clinc OOS, and Tatoeba) and 4 model variants for each dataset. We find that decomposing a trained model has a small cost (accuracy: -0.6%, BLEU score: +0.10%), and that the decomposed modules can be reused and replaced without needing to retrain.
Fairness of machine learning (ML) software has become a major concern in the recent past. Although recent research on testing and improving fairness has demonstrated impact on real-world software, providing fairness guarantees in practice is still lacking. Certification of ML models is challenging because of the complex decision-making process of the models. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals get similar treatment irrespective of protected attributes, e.g., race, sex, or age. Verifying this fairness property is hard because of the global checking and non-linear computation nodes in NNs. We propose a sound approach that makes individual fairness verification tractable for developers. The key idea is that many neurons in the NN always remain inactive when a smaller part of the input domain is considered. Fairify therefore leverages white-box access to the models in production and then applies formal-analysis-based pruning. Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample. We leverage interval arithmetic and neuron-activation heuristics to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources, and demonstrate its effectiveness, scalability, and performance over baselines and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which have practical implications.
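The pruning idea can be pictured with a small interval-arithmetic pass: for a given input partition, bounds are propagated through the network, and any ReLU whose upper bound never becomes positive is provably inactive over that partition and can be removed. The following is a minimal sketch of that concept, not Fairify's implementation.

```python
# Minimal sketch: propagate input intervals through a ReLU network; a neuron
# whose upper bound stays <= 0 over the whole partition is provably inactive.
import numpy as np

def interval_forward(weights, biases, lower, upper):
    """Propagate [lower, upper] bounds layer by layer (interval arithmetic)."""
    inactive = []
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        inactive.append(new_upper <= 0)          # ReLU output provably zero
        lower, upper = np.maximum(new_lower, 0), np.maximum(new_upper, 0)
    return inactive

# Toy 2-layer network over a small input partition.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
biases = [rng.normal(size=8), rng.normal(size=2)]
masks = interval_forward(weights, biases, lower=np.zeros(4), upper=np.ones(4) * 0.1)
print("provably inactive neurons per layer:", [m.sum() for m in masks])
```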
Machine Learning (ML) software has been widely adopted in modern society, with reported fairness implications for minority groups based on race, sex, age, etc. Many recent works have proposed methods to measure and mitigate algorithmic bias in ML models. The existing approaches focus on single, classifier-based ML models. However, real-world ML models are often composed of multiple independent or dependent learners in an ensemble (e.g., Random Forest), where fairness composes in a non-trivial way. How does fairness compose in ensembles? What are the fairness impacts of the learners on the ultimate fairness of the ensemble? Can fair learners result in an unfair ensemble? Furthermore, studies have shown that hyperparameters influence the fairness of ML models. Ensemble hyperparameters are more complex since they affect how learners are combined in different categories of ensembles. Understanding the impact of ensemble hyperparameters on fairness will help programmers design fair ensembles. Today, these questions are not fully understood for different ensemble algorithms. In this paper, we comprehensively study popular real-world ensembles: bagging, boosting, stacking, and voting. We have developed a benchmark of 168 ensemble models collected from Kaggle on four popular fairness datasets. We use existing fairness metrics to understand the composition of fairness. Our results show that ensembles can be designed to be fairer without using mitigation techniques. We also identify the interplay between fairness composition and data characteristics to guide fair ensemble design. Finally, our benchmark can be leveraged for further research on fair ensembles. To the best of our knowledge, this is one of the first and largest studies on fairness composition in ensembles presented in the literature.
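A minimal sketch of the kind of measurement involved, assuming a hypothetical preprocessed dataset with a binary label and a binary protected attribute: compute a group-fairness metric for each base learner of a bagging ensemble and for the ensemble as a whole, to see how fairness composes. The dataset path, column names, and metric choice are assumptions, not the paper's benchmark.

```python
# Illustrative sketch: compare a group-fairness metric for each base learner of a
# bagging ensemble against the ensemble itself.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def statistical_parity_difference(y_pred, protected):
    """P(y_hat=1 | protected=0) - P(y_hat=1 | protected=1)."""
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

df = pd.read_csv("adult_preprocessed.csv")  # hypothetical preprocessed dataset
X, y, sex = df.drop(columns=["income", "sex"]), df["income"], df["sex"].to_numpy()

ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=10).fit(X, y)

# Fairness of each learner does not trivially determine fairness of the ensemble.
for i, learner in enumerate(ensemble.estimators_):
    spd = statistical_parity_difference(learner.predict(X.to_numpy()), sex)
    print(f"learner {i}: SPD = {spd:+.3f}")
print("ensemble:", statistical_parity_difference(ensemble.predict(X), sex))
```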
The ever-increasing number of materials science articles makes it hard to infer chemistry-structure-property relations from published literature. We use natural language processing (NLP) methods to automatically extract material property data from the abstracts of polymer literature. As a component of our pipeline, we trained a materials-science language model using 2.4 million materials science abstracts, which outperforms other baseline models on three out of five named entity recognition datasets when used as a text encoder. Using this pipeline, we obtained ~300,000 material property records from ~130,000 abstracts in 60 hours. The extracted data were analyzed for a range of applications such as fuel cells, supercapacitors, and polymer solar cells to recover non-trivial insights. The data extracted through our pipeline are made available through a web platform at https://polymerscholar.org, which can be used to conveniently locate the material property data recorded in abstracts. This work demonstrates the feasibility of an automatic pipeline that starts from published literature and ends with a complete set of extracted material property information.
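Conceptually, the extraction step can be pictured as running a named-entity-recognition model over each abstract and post-processing the entities into property records. The sketch below uses the Hugging Face pipeline API with a placeholder model path; it is an assumption about the general shape of such a pipeline, not the authors' released code or checkpoint.

```python
# Conceptual sketch of the NER step (model path is a placeholder, not the paper's
# released checkpoint): tag material and property mentions in an abstract.
from transformers import pipeline

ner = pipeline("token-classification",
               model="path/to/materials-ner-model",  # hypothetical fine-tuned encoder
               aggregation_strategy="simple")

abstract = ("The PVDF thin film exhibited a dielectric constant of 10.2 "
            "and a glass transition temperature of -35 C.")
for entity in ner(abstract):
    # Downstream post-processing would pair material, property name, value, and unit
    # into one structured property record per mention.
    print(entity["entity_group"], "->", entity["word"])
```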
The combinatorial problem of learning directed acyclic graphs (DAGs) from data was recently framed as a purely continuous optimization problem by leveraging a differentiable acyclicity characterization of DAGs based on the trace of a matrix exponential function. Existing acyclicity characterizations are based on the idea that powers of an adjacency matrix contain information about walks and cycles. In this work, we propose a $\textit{fundamentally different}$ acyclicity characterization based on the log-determinant (log-det) function, which leverages the nilpotency property of DAGs. To deal with the inherent asymmetries of a DAG, we relate the domain of our log-det characterization to the set of $\textit{M-matrices}$, which is a key difference from the classical log-det function defined over the cone of positive definite matrices. Similar to previously proposed acyclicity functions, our characterization is also exact and differentiable. However, when compared to existing characterizations, our log-det function (1) is better at detecting large cycles, (2) has better-behaved gradients, and (3) runs orders of magnitude faster in practice. On the optimization side, we drop the typically used augmented Lagrangian scheme and propose DAGMA ($\textit{DAGs via M-matrices for Acyclicity}$), a method that resembles the central path for barrier methods. Each point in the central path of DAGMA is a solution to an unconstrained problem regularized by our log-det function; we then show that at the limit of the central path the solution is guaranteed to be a DAG. Finally, we provide extensive experiments for $\textit{linear}$ and $\textit{nonlinear}$ SEMs and show that our approach can reach large speed-ups and smaller structural Hamming distances against state-of-the-art methods.
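For concreteness, the log-det acyclicity function described above (as I understand the DAGMA formulation, $h^s(W) = -\log\det(sI - W \circ W) + d\log s$) can be computed in a few lines; it evaluates to zero exactly on DAGs and stays finite only while $sI - W \circ W$ remains an M-matrix. This is a sketch of the characterization only, not the full DAGMA optimization scheme.

```python
# Sketch of the log-det acyclicity function: h_s(W) = -logdet(sI - W∘W) + d*log(s).
import numpy as np

def h_logdet(W, s=1.0):
    d = W.shape[0]
    M = s * np.eye(d) - W * W                  # Hadamard square keeps entries >= 0
    sign, logabsdet = np.linalg.slogdet(M)
    assert sign > 0, "W left the admissible (M-matrix) domain"
    return -logabsdet + d * np.log(s)

# A DAG adjacency (strictly upper triangular) gives h = 0; adding a cycle makes h > 0.
W_dag = np.triu(np.random.rand(4, 4), k=1)
W_cyc = W_dag.copy()
W_cyc[3, 0] = 0.9
print(h_logdet(W_dag))   # ~0
print(h_logdet(W_cyc))   # > 0
```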
Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete to develop conversational agents through the SocialBot Grand Challenge. The goal of the challenge is to build agents capable of conversing coherently and engagingly with humans on popular topics for 20 minutes, while achieving an average rating of at least 4.0/5.0. However, as conversational agents attempt to assist users with increasingly complex tasks, new conversational AI techniques and evaluation platforms are needed. The Alexa Prize TaskBot Challenge, established in 2021, builds on the success of the SocialBot Challenge by introducing the requirement of interactively assisting humans with real-world cooking and do-it-yourself tasks, while using both voice and visual modalities. This challenge requires TaskBots to identify and understand the user's needs, identify and integrate task and domain knowledge, and develop new ways of engaging the user without distracting them from the task at hand, among other challenges. This paper provides an overview of the TaskBot Challenge, describes the infrastructure support provided to the teams with the CoBot Toolkit, and summarizes the approaches the participating teams took to overcome the research challenges. Finally, it analyzes the performance of the competing TaskBots during the first year of the competition.
Concept-based explanations of black-box models are often more intuitive for humans to understand. The most widely adopted approach for concept-based explanations is Concept Activation Vectors (CAV). CAV relies on learning a linear relation between some latent representation of a given model and concepts. Linear separability is usually implicitly assumed but does not hold in general. In this work, we start from the original intent of concept-based explanations and propose Concept Gradients (CG), extending concept-based explanations beyond linear concept functions. We show that for general (potentially non-linear) concepts, we can mathematically evaluate how a small change of concept affects the model's prediction, which leads to an extension of gradient-based explanations to the concept space. We demonstrate empirically that CG outperforms CAV in both toy examples and real-world datasets.
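One way to realize the idea of measuring how a small concept change affects the prediction is a chain rule through a shared representation, combining the model's gradient with a pseudo-inverse of the concept function's Jacobian. The sketch below illustrates that reading of the abstract; it is not necessarily the paper's exact estimator, and the toy networks are assumptions for illustration.

```python
# Hedged sketch of chain-rule concept attribution: estimate d(prediction)/d(concept)
# from the model gradient and the pseudo-inverse of the concept Jacobian.
import torch

def concept_attribution(f, g, x):
    """f: representation -> class logit, g: representation -> concept scores."""
    x = x.detach().requires_grad_(True)
    grad_f = torch.autograd.grad(f(x).sum(), x)[0]        # shape (d,)
    jac_g = torch.autograd.functional.jacobian(g, x)      # shape (k, d)
    return grad_f @ torch.linalg.pinv(jac_g)              # shape (k,): per-concept effect

# Toy non-linear model and concept head over a shared 8-d representation.
torch.manual_seed(0)
f = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
g = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 3))
print(concept_attribution(f, g, torch.randn(8)))
```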
In silico tissue models enable the evaluation of quantitative models of magnetic resonance imaging, including the validation and sensitivity analysis of imaging biomarkers and tissue microstructure parameters. We propose a novel method to generate a realistic numerical phantom of myocardial microstructure. We extend previous studies by accounting for the variability of cardiomyocyte shape, water exchange between cardiomyocytes (intercalated discs), myocardial microstructure disarray, and four sheetlet orientations. In the first stage of the method, cardiomyocytes and sheetlets are generated by considering the shape variability and intercalated discs in cardiomyocyte-to-cardiomyocyte connections. Sheetlets are then aggregated and oriented in the directions of interest. Our morphometric study demonstrates no significant difference ($p > 0.01$) between the distributions of volume, length, and primary and secondary axes of the numerical and real (literature) cardiomyocyte data. Structural correlation analysis validates that the in-silico tissue is in the same class of disorderliness as the real tissue. Additionally, the absolute angle difference between the simulated helical angle (HA) of the cardiomyocytes and the input HA (reference value) ($4.3^\circ \pm 3.1^\circ$) is in good agreement with the absolute angle differences between HA measured using experimental cardiac diffusion tensor imaging (cDTI) and histology (reference value) reported by Holmes et al. (2000) ($3.7^\circ \pm 6.4^\circ$) and Scollan et al. (1998) ($4.9^\circ \pm 14.6^\circ$). The angular distances between the eigenvectors of the input and simulated cDTI are smaller than those reported between structure tensor imaging (the gold standard) and experimental cDTI. These results confirm that the proposed method can generate richer numerical phantoms of the myocardium than previous studies.
We develop BenchPress, the first ML benchmark generator for compilers that is steerable within feature-space representations of source code. BenchPress synthesizes compiling functions by adding new code in any part of an empty or existing sequence, jointly observing its left and right context, and achieves excellent compilation rates. BenchPress steers benchmark generation towards desired target features that have proven infeasible for state-of-the-art synthesizers (and indeed humans) to reach, performing better than (a) CLgen, a state-of-the-art ML synthesizer, (b) the CLSmith fuzzer, (c) the SRCIROR mutator, and (d) human-written code from GitHub. BenchPress is the first generator to search the feature space with active learning in order to generate benchmarks that improve a downstream task. We show how Grewe et al.'s CPU vs. GPU heuristic model can obtain a higher speedup when trained on BenchPress's benchmarks compared to other techniques. BenchPress is a powerful code generator: its generated samples compile at a rate of 86%, compared to 2.33% for CLgen. Starting from an empty fixed input, BenchPress produces 10x more compilable OpenCL benchmarks than CLgen, and they are larger and more diverse.
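For a concrete sense of what "compilation rate" means here, the sketch below tries to build each generated kernel with clang's OpenCL front end and reports the fraction that compile. The flags and tooling are assumptions for illustration (a clang build with OpenCL support is required) and are unrelated to BenchPress's own evaluation harness.

```python
# Illustrative sketch (not BenchPress code): estimate the compilation rate of a set
# of generated OpenCL kernels by attempting to build each one with clang.
import pathlib
import subprocess
import tempfile

def compiles(source: str) -> bool:
    with tempfile.NamedTemporaryFile("w", suffix=".cl", delete=False) as f:
        f.write(source)
        path = f.name
    # Syntax/semantics check only (-c), no linking; flags are an assumption.
    result = subprocess.run(
        ["clang", "-x", "cl", "-Xclang", "-finclude-default-header",
         "-cl-std=CL2.0", "-c", path, "-o", "/dev/null"],
        capture_output=True)
    pathlib.Path(path).unlink()
    return result.returncode == 0

kernels = ["__kernel void k(__global int* a) { a[get_global_id(0)] += 1; }",
           "__kernel void broken(__global int* a) { a[0] = ; }"]
rate = sum(compiles(k) for k in kernels) / len(kernels)
print(f"compilation rate: {rate:.0%}")
```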